******************************************************************
NEURON.DOC
******************************************************************
Version 2.5 E.FARHI 07/92 TC 2.0
This is the documentation for the file NEURON.C .
In it you should find a short description of the structures and procedures.
First of all, the library modules used are entirely portable.
*** Defines
NB_LAYERS_MAXI
Maximum number of different layers in the net.
NB_LEVELS_MAXI
Idem with levels. A level is a logic layer. See NETWORK.DOC.
NB_N_LAYER_MAXI
Maximum number of neurons in a layer.
NB_BACKP_MAXI
Number of backpropagations before activating the shaker.
LEARN_FAIL
Maximum number of learning turns.
TEMPERING_OFF
Simulated tempering disabled.
TEMPERING_ON
Simulated tempering enabled (try to guess...).
MSG_???
Different global messages for communications between procedures.
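The defines above might look like the following sketch. The numeric values here are assumptions for illustration only; the actual settings live in NEURON.C.

```c
/* Illustrative values only -- the real numbers are set in NEURON.C. */
#define NB_LAYERS_MAXI   5     /* max physical layers in the net       */
#define NB_LEVELS_MAXI   10    /* max logical levels (see NETWORK.DOC) */
#define NB_N_LAYER_MAXI  64    /* max neurons in a layer               */
#define NB_BACKP_MAXI    100   /* backpropagations before the shaker   */
#define LEARN_FAIL       1000  /* max learning turns before giving up  */
#define TEMPERING_OFF    0     /* simulated tempering disabled         */
#define TEMPERING_ON     1     /* simulated tempering enabled          */
```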
*** Structures
FLAG_N
Neuron flag. Contains activity. Can be extended.
FLAG_L
Layer flag.
NEURONS
Neuron structure. Data and links inside the neuron layer.
NETWORKS
Network structure. 'inter' is the link(weight) between two layers.
This is the big part of the file.
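A sketch of how these structures might fit together. Only the overall shape and the 'inter' field follow NEURON.DOC; all other field names and the array sizes are assumptions, not the actual NEURON.C layout.

```c
#define NB_LAYERS_MAXI   3     /* illustrative sizes, not from NEURON.C */
#define NB_LEVELS_MAXI   5
#define NB_N_LAYER_MAXI  16

typedef struct {
    int activity;                      /* neuron activity flag; extensible */
} FLAG_N;

typedef struct {
    int interconnection;               /* 0 = Perceptron, 1 = Hopfield */
} FLAG_L;

typedef struct {
    float  state;                      /* data of the neuron */
    float  links[NB_N_LAYER_MAXI];     /* links inside the neuron layer */
    FLAG_N flag;
} NEURONS;

typedef struct {
    NEURONS layer[NB_LAYERS_MAXI][NB_N_LAYER_MAXI];
    /* 'inter' is the link (weight) between two layers */
    float   inter[NB_LEVELS_MAXI][NB_N_LAYER_MAXI][NB_N_LAYER_MAXI];
    FLAG_L  flag[NB_LAYERS_MAXI];
    int     level[NB_LEVELS_MAXI];     /* layer order, e.g. {0,1,2} */
} NETWORKS;
```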
*** Procedures
*list_norm(size)
Creates a normal list: 1,2,..,size.
*list_alea(size,nb_proto)
Creates a random list of prototypes.
in_limits(value,high,low)
Tests if 'value' is between high and low.
weight_sum(net,neuron,level)
Computes the synaptic potential of a neuron in 'level'.
level_state(net,level)
Computes the level state...
level_tempering(net,level)
Simulated tempering for a level.
network_state(net,input)
neuron_state(net,synaptic_sum)
backpropagation(net,input,output)
Performs one learning turn of the gradient (backpropagation) algorithm on the net.
learn_prototype(net,list,proto_nb,inputs,outputs,nb_backp)
Learns the prototype 'proto_nb' from the list of
inputs-outputs, 'nb_backp' times.
learn_list(net,list,size,inputs,outputs)
Learns a list once.
learn_opt(net,list,size,inputs,outputs)
Optimized learning of a list. The best...
learn_norm(..)
Learns a list entirely.
learn_test(..)
Tests if prototypes can be learned.
init_alea_network(net)
Initialize a random net.
init_0_network(net)
Initializes a zero net.
init_flag_network(..)
Global init of net flags.
init_var_network(..)
Init of main net vars.
compare_output(..)
learn_verify(..)
Verifies if a list is well known.
sqr_sum(net)
Square sum of links(weights).
print_network(net)
Prints the net.
info_struct_network(net)
Prints some general structure info.
info_vars_network(net)
Prints the main vars of the net.
*** The network models.
With the data structure used, it is possible to create a network
containing several physical and logical layers, Hopfield or Perceptron.
A LOGICAL layer ('level') is a physical layer placed at a certain level
of the network. So usually we define some PHYSICAL layers ('layers'), three
being enough for many applications, and we give an order to create the logical
levels.
The first level is the input, and the last, the output.
Here are some order examples:
{0} One layer, one level. To use it, you need to activate
interconnection=1 for a Hopfield network.
{0,1} Perceptron network. No hidden layers.
{0,1,2} 3-layer multi-perceptron network. Simple and efficient.
{0,1,0} 3 levels, 2 layers. There is a partial feedback.
{0,1,2,1,0}, {0,1,2,0}, ...
The partial feedback (input layer=output layer) enables sequence
recognition.
For each level, you can choose to use a Perceptron (interconnection=0)
or a Hopfield (interconnection=1) layer.
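The feedback cases above can be checked mechanically: a level order has partial feedback exactly when the first and last levels name the same physical layer. A small sketch (the helper name is ours, not from NEURON.C):

```c
/* Returns 1 when a level order has partial feedback, i.e. the input
   layer is reused as the output layer (e.g. {0,1,0}). Illustrative
   helper only; it does not exist in NEURON.C. */
int has_partial_feedback(const int *order, int nb_levels)
{
    return nb_levels > 1 && order[0] == order[nb_levels - 1];
}
```

So {0,1,2} has no feedback, while {0,1,0} and {0,1,2,0} both do.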
(next to read: NETWORK.DOC)